Experts issue a dire warning about AI and encourage limits be imposed
A statement from hundreds of tech leaders carries a stark warning: artificial intelligence (AI) poses an existential threat to humanity. In just 22 words, the statement reads, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Among the tech leaders, CEOs and scientists who signed the statement that was issued Tuesday is Scott Niekum, an associate professor who heads the Safe, Confident, and Aligned Learning + Robotics (SCALAR) lab at the University of Massachusetts Amherst.
Niekum tells NPR's Leila Fadel on Morning Edition that AI has progressed so fast that the threats are still uncalculated, from near-term impacts on minority populations to longer-term catastrophic outcomes. "We really need to be ready to deal with those problems," Niekum said.
This interview has been edited for length and clarity.
Interview Highlights
Does AI, if left unregulated, spell the end of civilization?
"We don't really know how to accurately communicate to AI systems what we want them to do. So imagine I want to teach a robot how to jump. So I say, "Hey, I'm going to give you a reward for every inch you get off the ground." Maybe the robot decides just to go grab a ladder and climb up it and it's accomplished the goal I set out for it. But in a way that's very different from what I wanted it to do. And that maybe has side effects on the world. Maybe it's scratched something with the ladder. Maybe I didn't want it touching the ladder in the first place. And if you swap out a ladder and a robot for self-driving cars or AI weapon systems or other things, that may take our statements very literally and do things very different from what we wanted.
Why would scientists have unleashed AI without considering the consequences?
There are huge upsides to AI if we can control it. But one of the reasons that we put the statement out is that we feel like the study of safety and regulation of AI and mitigation of the harms, both short-term and long-term, has been understudied compared to the huge gain of capabilities that we've seen...And we need time to catch up and resources to do so.
What are some of the harms already experienced because of AI technology?
A lot of them, unfortunately, as many things do, fall with a higher burden on minority populations. So, for example, facial recognition systems work more poorly on Black people and have led to false arrests. Misinformation has gotten amplified by these systems...But it's a spectrum. And as these systems become more and more capable, the types of risks and the levels of those risks almost certainly are going to continue to increase.
AI is such a broad term. What kind of technology are we talking about?
AI is not just any one thing. It's really a set of technologies that allow us to get computers to do things for us, often by learning from data. This can be something as simple as scheduling elevators more efficiently, or deciding which of several ambulances to dispatch based on data we have about the current state of affairs in the city or about the patients.
It can go all the way to the other end: extremely general agents. Something like ChatGPT operates in the domain of language, where you can do so many different things. You can write a short story for somebody, you can give them medical advice, you can generate code that could be used to hack, which raises some of these dangers. And what many companies are interested in building is something called AGI, artificial general intelligence, which colloquially means an AI system that can do most or all of the tasks a human can do, at least at a human level.